Explore AI Email Security Approaches with Darktrace

February 1, 2021

Innovations in artificial intelligence (AI) have fundamentally changed the email security landscape in recent years, but it can often be hard to determine what makes one system different from the next. In reality, approaches under that umbrella term differ significantly, and the difference may determine whether a technology provides genuine protection or merely the perception of it.

One backward-looking approach involves feeding a machine thousands of emails that have already been deemed to be malicious, and training it to look for patterns in these emails in order to spot future attacks. The second approach uses an AI system to analyze the entirety of an organization’s real-world data, enabling it to establish a notion of what is ‘normal’ and then spot subtle deviations indicative of an attack.

Below, we compare the relative merits of each approach, with special consideration to novel attacks that leverage the latest news headlines to bypass machine learning systems trained on historical data sets. Training a machine on previously identified ‘known bads’ is advantageous only in specific contexts that don’t change over time: recognizing the intent behind an email, for example. An effective email security solution must also incorporate a self-learning approach that understands what is ‘normal’ in the context of an organization, so it can identify unusual and anomalous emails and catch even novel attacks.

Signatures – a backward-looking approach

Over the past few decades, cyber security technologies have looked to mitigate risk by preventing previously seen attacks from occurring again. In the early days, when the lifespan of a given strain of malware or the infrastructure of an attack was in the range of months and years, this method was satisfactory. But the approach inevitably results in playing catch-up with malicious actors: it always looks to the past to guide detection for the future. With decreasing lifetimes of attacks, where a domain could be used in a single email and never seen again, this historic-looking signature-based approach is now being widely replaced by more intelligent systems.

Training a machine on ‘bad’ emails

The first AI approach we often see in the wild involves harnessing an extremely large data set of thousands or millions of emails. An AI is trained to look for common patterns in the emails already judged malicious, and the system then updates its models, rule sets, and blacklists based on that data.

This method certainly represents an improvement over traditional rules and signatures, but it does not escape the fact that it is still reactive, unable to stop new attack infrastructure and new types of email attacks. It simply automates the flawed, traditional approach: instead of a human updating the rules and signatures, a machine updates them.

Relying on this approach alone has one basic but critical flaw: it does not enable you to stop new types of attacks that it has never seen before. It accepts that there has to be a ‘patient zero’ – or first victim – in order to succeed.

The industry is beginning to acknowledge the challenges with this approach, and huge amounts of resources – both automated systems and security researchers – are being thrown into minimizing its limitations. One such technique is “data augmentation”: taking a malicious email that slipped through and generating many “training samples” with open-source text augmentation libraries to create “similar” emails. The machine thus learns not only the missed phish as ‘bad’ but several variants of it, enabling it to detect future attacks that use similar wording and fall into the same category.
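
To make the idea concrete, here is a minimal, hypothetical sketch of that augmentation step in Python: it takes one missed phishing email and generates “similar” training samples by swapping words for WordNet synonyms. The library choice and function names are our own illustration, not a description of any vendor’s actual pipeline.

```python
import random

# Requires the NLTK WordNet corpus: nltk.download("wordnet")
from nltk.corpus import wordnet


def augment_email(text: str, swap_prob: float = 0.3) -> str:
    """Return a variant of `text` with some words swapped for synonyms."""
    words = []
    for word in text.split():
        synsets = wordnet.synsets(word)
        if synsets and random.random() < swap_prob:
            # Take a lemma from the word's first sense as a crude synonym.
            synonym = random.choice(synsets[0].lemma_names()).replace("_", " ")
            words.append(synonym)
        else:
            words.append(word)
    return " ".join(words)


missed_phish = "urgent invoice attached please verify your account details"
# Generate several 'similar' emails so the model learns the category,
# not just the one sample that slipped through.
training_samples = [augment_email(missed_phish) for _ in range(20)]
```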

But pouring all this time and effort into an unsolvable problem is like putting all your eggs in the wrong basket. Why try to fix a flawed system rather than change the game altogether? To spell out the limitations of this approach, let us look at a situation where the nature of the attack is entirely new.

The rise of ‘fearware’

When the global pandemic hit, and governments began enforcing travel bans and imposing stringent restrictions, there was undoubtedly a collective sense of fear and uncertainty. As explained previously in this blog, cyber-criminals were quick to capitalize on this, taking advantage of people’s desire for information to send out topical emails related to COVID-19 containing malware or credential-grabbing links.

These emails often spoofed the Centers for Disease Control and Prevention (CDC), or later on, as the economic impact of the pandemic began to take hold, the Small Business Administration (SBA). As the global situation shifted, so did attackers’ tactics. And in the process, over 130,000 new domains related to COVID-19 were purchased.

Let’s now consider how the above approach to email security might fare when faced with these new email attacks. The question becomes: how can you train a model to look out for emails containing ‘COVID-19’ when the term didn’t even exist at training time?

And while COVID-19 is the most salient example of this, the same reasoning follows for every novel and unexpected news cycle that attackers leverage in their phishing emails to evade tools using this approach – attracting the recipient’s attention as a bonus. Moreover, if an email attack is truly targeted at your organization, it might contain bespoke and tailored news referring to something so specific that supervised machine learning systems could never be trained on it.

This isn’t to say there’s not a time and a place in email security for looking at past attacks to set yourself up for the future. It just isn’t here.

Spotting intention

Darktrace uses this approach for one specific, future-proof purpose that is not prone to change over time: analyzing the grammar and tone of an email in order to identify intention, asking questions like ‘Does this look like an attempt at inducement? Is the sender trying to solicit sensitive information? Is this extortion?’ By training a system on an extremely large data set collected over a long period, you can start to understand what, for instance, inducement looks like. That then enables you to spot future instances of inducement based on a common set of characteristics.

Training a system in this way works because, unlike news cycles and the topics of phishing emails, fundamental patterns in tone and language don’t change over time. An attempt at solicitation is always an attempt at solicitation, and will always bear common characteristics.
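
As a rough illustration of what ‘training on intent’ might look like, the sketch below fits a simple supervised text classifier over labeled examples. The labels, examples, and model choice are hypothetical stand-ins for the kind of large, curated data set described above; they are not Darktrace’s actual models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set; a real system would use a very large
# labeled corpus collected over time.
emails = [
    "act now to claim your exclusive reward",
    "send the payment or these files go public",
    "please confirm your password to keep your access",
    "agenda attached for tomorrow's meeting",
]
labels = ["inducement", "extortion", "solicitation", "benign"]

# Word and bigram frequencies capture tone and phrasing, which change far
# less over time than news topics do.
intent_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
intent_model.fit(emails, labels)

print(intent_model.predict(["kindly verify your login details today"]))
```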

For this reason, this approach plays only one small part in a much larger engine. It gives an additional indication of the nature of the threat, but is not in itself used to determine which emails are anomalous.

Detecting the unknown unknowns

In addition to using the above approach to identify intention, Darktrace uses unsupervised machine learning, which starts with extracting and extrapolating thousands of data points from every email. Some of these are taken directly from the email itself, while others are only ascertainable by the above intention-type analysis. Additional insights are also gained from observing emails in the wider context of all available data across email, network and the cloud environment of the organization.

Only with this significantly larger and more comprehensive set of indicators – a more complete description of the email – can the data be fed into a topic-indifferent machine learning engine, which interrogates the data in millions of ways to understand whether the email belongs, given the wider context of the organization’s typical ‘pattern of life’. Monitoring all emails in conjunction allows the machine to establish things like the following (a minimal sketch of this kind of check appears after the list):

  • Does this person usually receive ZIP files?
  • Does this supplier usually send links to Dropbox?
  • Has this sender ever logged in from China?
  • Do these recipients usually get the same emails together?
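
As a hedged sketch of this idea, the snippet below fits an unsupervised anomaly detector to an organization’s own email history and scores a new message against it. The four features are invented stand-ins for the thousands of data points described above.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per historical email, with illustrative binary features:
# [has_zip_attachment, links_to_rare_domain, known_sender, usual_recipient_group]
history = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    # ...in practice, the organization's entire email history
])

# The model learns 'normal' for this organization only; no known-bad
# signatures are involved.
detector = IsolationForest(random_state=0).fit(history)

new_email = np.array([[1, 1, 0, 0]])  # ZIP file, rare domain, unknown sender
# Lower scores mean more anomalous relative to this organization's traffic.
print(detector.score_samples(new_email))
```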

The technology identifies patterns across an entire organization and gains a continuously evolving sense of ‘self’ as the organization grows and changes. It is this innate understanding of what is and isn’t ‘normal’ that allows AI to spot the truly ‘unknown unknowns’ instead of just ‘new variations of known bads.’

This type of analysis brings an additional advantage: it is language- and topic-agnostic. Because it focuses on anomaly detection rather than on finding specific patterns that indicate threat, it is effective regardless of whether an organization typically communicates in English, Spanish, Japanese, or any other language.

By layering both of these approaches, you can understand the intention behind an email and whether it belongs, given the context of normal communication. And all of this is done without ever assuming or expecting that you’ve seen this threat before.

Years in the making

It is now well established that the legacy approach to email security has failed, and this makes it easy to see why existing recommendation engines are being applied to the cyber security space. At first glance, these solutions may appeal to a security team, but highly targeted, truly unique spear phishing emails easily skirt them. They cannot be relied on to stop email threats on the first encounter, because they depend on known attacks with previously seen topics, domains, and payloads.

An effective, layered AI approach takes years of research and development. There is no single mathematical model to solve the problem of determining malicious emails from benign communication. A layered approach accepts that competing mathematical models each have their own strengths and weaknesses. It autonomously determines the relative weight these models should have and weighs them against one another to produce an overall ‘anomaly score’ given as a percentage, indicating exactly how unusual a particular email is in comparison to the organization’s wider email traffic flow.
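
A toy sketch of that layering, with invented model names and weights, might look like the following; the real system determines the weights autonomously, but the arithmetic of a weighted combination is the same.

```python
def overall_anomaly_score(model_scores: dict, weights: dict) -> float:
    """Combine per-model scores (0..1) into a single percentage."""
    total_weight = sum(weights.values())
    combined = sum(model_scores[name] * weights[name] for name in model_scores)
    return 100.0 * combined / total_weight


# Hypothetical outputs from three competing models for one email.
scores = {"link_rarity": 0.91, "sender_history": 0.78, "intent": 0.40}
weights = {"link_rarity": 2.0, "sender_history": 1.5, "intent": 1.0}

print(f"Anomaly score: {overall_anomaly_score(scores, weights):.0f}%")
```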

It is time for email security to well and truly drop the assumption that you can look at threats of the past to predict tomorrow’s attacks. An effective AI cyber security system can identify abnormalities with no reliance on historical attacks, enabling it to catch truly unique novel emails on the first encounter – before they land in the inbox.

Author
Dan Fein
VP, Product

Based in New York, Dan joined Darktrace’s technical team in 2015, helping customers quickly achieve a complete and granular understanding of Darktrace’s product suite. Dan has a particular focus on Darktrace/Email, ensuring that it is effectively deployed in complex digital environments, and works closely with the development, marketing, sales, and technical teams. Dan holds a Bachelor’s degree in Computer Science from New York University.


Thread Hijacking: How Attackers Exploit Trusted Conversations to Infiltrate Networks

September 26, 2024 / Inside the SOC

What is Thread Hijacking?

Cyberattacks are becoming increasingly stealthy and targeted, with malicious actors focusing on high-value individuals to gain privileged access to their organizations’ digital environments. One technique that has gained prominence in recent years is thread hijacking. This method allows attackers to infiltrate ongoing conversations, exploiting the trust within these threads to access sensitive systems.

Thread hijacking typically involves attackers gaining access to a user’s email account, monitoring ongoing conversations, and then inserting themselves into these threads. By replying to existing emails, they can send malicious links, request sensitive information, or manipulate the conversation to achieve their goals, such as redirecting payments or stealing credentials. Because such emails appear to come from a trusted source, they often bypass human security teams and traditional security filters.

How does thread hijacking work?

  1. Initial Compromise: Attackers first gain access to a user’s email account, often through phishing, malware, or exploiting weak passwords.
  2. Monitoring: Once inside, they monitor the user’s email threads, looking for ongoing conversations that can be exploited.
  3. Infiltration: The attacker then inserts themselves into these conversations, often replying to existing emails. Because the email appears to come from a trusted source within an ongoing thread, it bypasses many traditional security filters and raises less suspicion.
  4. Exploitation: Using the trust established in the conversation, attackers can send malicious links, request sensitive information, or manipulate the conversation to achieve their goals, such as redirecting payments or stealing credentials.

A recent incident involving a Darktrace customer saw a malicious actor attempt to manipulate trusted email communications, potentially exposing critical data. The attacker created a new mailbox rule to forward specific emails to an archive folder, making it harder for the customer to notice the malicious activity. This highlights the need for advanced detection and robust preventive tools.

Darktrace’s Self-Learning AI is able to recognize subtle deviations in normal behavior, whether in a device or a Software-as-a-Service (SaaS) user. This capability enables it to detect emerging attacks in their early stages. In this post, we’ll delve into the attacker’s tactics and illustrate how Darktrace / IDENTITY™ successfully identified and mitigated a thread hijacking attempt, preventing escalation and potential disruption to the customer’s network.

Thread hijacking attack overview & Darktrace coverage

On August 8, 2024, Darktrace detected an unusual email received by a SaaS account on a customer’s network. The email appeared to be a reply to a previous chain discussing tax and payment details, likely related to a transaction between the customer and one of their business partners.

Figure 1: Headers of the suspicious email received.

A few hours later, Darktrace detected the same SaaS account creating a new mailbox rule named “.”, a tactic commonly used by malicious actors to evade detection when setting up new email rules [2]. This rule was designed to forward all emails containing a specific word to the user’s “Archives” folder. This evasion technique is typically used to move any malicious emails or responses to a rarely opened folder, ensuring that the genuine account holder does not see replies to phishing emails or other malicious messages sent by attackers [3].

Darktrace recognized the newly created email rule as suspicious after identifying the following parameters (a sketch of this kind of detection heuristic follows the list):

  • AlwaysDeleteOutlookRulesBlob: False
  • Force: False
  • MoveToFolder: Archive
  • Name: “.”
  • FromAddressContainsWords: [Redacted]
  • MarkAsRead: True
  • StopProcessingRules: True

Darktrace also noted that the user attempting to create this new email rule had logged into the SaaS environment from an unusual IP address. Although the IP was located in the same country as the customer and the ASN used by the malicious actor was typical for the customer’s network, the rare IP, coupled with the anomalous behavior, raised suspicions.

Figure 2: Hijacked SaaS account creating the new mailbox rule.

Given the suspicious nature of this activity, Darktrace’s Security Operations Centre (SOC) investigated it and alerted the customer’s security team.

Due to a public holiday in the customer's location (likely an intentional choice by the threat actor), their security team did not immediately notice or respond to the notification. Fortunately, the customer had Darktrace's Autonomous Response capability enabled, which allowed it to take action against the suspicious SaaS activity without human intervention.

In this instance, Darktrace swiftly disabled the seemingly compromised SaaS user for 24 hours. This action halted the spread of the compromise to other accounts on the customer’s SaaS platform and prevented any sensitive data exfiltration. Additionally, it provided the security team with ample time to investigate the threat and remove the user from their environment. The customer also received detailed incident reports and support through Darktrace’s Security Operations Support service, enabling direct communication with Darktrace’s expert Analyst team.

Conclusion

Ultimately, Darktrace’s anomaly-based detection allowed it to identify the subtle deviations from the user’s expected behavior, indicating a potential compromise on the customer’s SaaS platform. In this case, Darktrace detected a login to a SaaS platform from an unusual IP address, despite the attacker’s efforts to conceal their activity by using a known ASN and logging in from the expected country.

Despite the attempted SaaS hijack occurring on a public holiday when the customer’s security team was likely off-duty, Darktrace autonomously detected the suspicious login and the creation of a new email rule. It swiftly blocked the compromised SaaS account, preventing further malicious activity and safeguarding the organization from data exfiltration or escalation of the compromise.

This highlights the growing need for AI-driven security capable of responding to malicious activity in the absence of human security teams and of detecting subtle behavioral changes that traditional security tools miss.

Credit to Ryan Traill, Threat Content Lead, for his contribution to this blog.

Appendices

Darktrace Model Detections

SaaS / Compliance / Anomalous New Email Rule

Experimental / Antigena Enhanced Monitoring from SaaS Client Block

Antigena / SaaS / Antigena Suspicious SaaS Activity Block

Antigena / SaaS / Antigena Email Rule Block

References

[1] https://blog.knowbe4.com/whats-the-best-name-threadjacking-or-man-in-the-inbox-attacks

[2] https://darktrace.com/blog/detecting-attacks-across-email-saas-and-network-environments-with-darktraces-combined-ai-approach

[3] https://learn.microsoft.com/en-us/defender-xdr/alert-grading-playbook-inbox-manipulation-rules

About the author
Maria Geronikolou
Cyber Analyst

How AI can help CISOs navigate the global cyber talent shortage

September 26, 2024

The global picture

4 million cybersecurity professionals are needed worldwide to protect and defend the digital world – twice the number currently in the workforce [1].

Innovative technologies are transforming business operations, enabling access to new markets, personalized customer experiences, and increased efficiency. However, this digital transformation also challenges Security Operations Centers (SOCs) with managing and protecting a complex digital environment without additional resources or advanced skills.

At the same time, the cybersecurity industry is suffering a severe global skills shortage, leaving many SOCs understaffed and under-skilled. With a 72% increase in data breaches from 2021 to 2023 [2], SOCs are dealing with overwhelming alert volumes from diverse security tools. Nearly 60% of cybersecurity professionals report burnout [3], leading to high turnover rates. Consequently, only a fraction of alerts are thoroughly investigated, increasing the risk of undetected breaches. More than half of organizations that experienced breaches in 2024 admitted to having short-staffed SOCs [4].

How AI can help organizations do more with less

Cyber defense needs to evolve at the same pace as cyber-attacks, but the global skills shortage is making that difficult. As threat actors increasingly abuse AI for malicious purposes, using defensive AI to enable innovation and optimization at scale is reshaping how organizations approach cybersecurity.

The value of AI isn’t in replacing humans, but in augmenting their efforts and enabling them to scale their defense capabilities and their value to the organization. With AI, cybersecurity professionals can operate at digital speed, analyzing vast data sets, identifying more vulnerabilities with higher accuracy, responding and triaging faster, reducing risks, and implementing proactive measures—all without additional staff.

Research indicates that organizations leveraging AI and automation extensively in security functions—such as prevention, detection, investigation, or response—reduced their average mean time to identify (MTTI) and mean time to contain (MTTC) data breaches by 33% and 43%, respectively [5]. These organizations also contained breaches nearly 100 days faster on average than those not using AI and automation.

First, you've got to apply the right AI to the right security challenge. We dig into how different AI technologies can bridge specific skills gaps in the CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

Cases in point: AI as a human force multiplier

Let’s take a look at just some of the cybersecurity challenges to which AI can be applied to scale defense efforts and relieve the burden on the SOC. We go further into real-life examples in our white paper.

Automated threat detection and response

AI enables 24/7 autonomous response, eliminating the need for after-hours SOC shifts and providing security leaders with peace of mind. AI can scale response efforts by analyzing vast amounts of data in real time, identifying anomalies, and initiating precise autonomous actions to contain incidents, which buys teams time for investigation and remediation.  

Triage and investigation

AI enhances the triage process by automatically categorizing and prioritizing security alerts, allowing cybersecurity professionals to focus on the most critical threats. It creates a comprehensive picture of an attack, helps identify its root cause, and generates detailed reports with key findings and recommended actions.  

Automation also significantly reduces overwhelming alert volumes and high false positive rates, enabling analysts to concentrate on high-priority threats and engage in more proactive and strategic initiatives.

Eliminating silos and improving visibility across the enterprise

Security and IT teams are overwhelmed by the technological complexity of operating multiple tools, resulting in manual work and excessive alerts. AI can correlate threats across the entire organization, enhancing visibility and eliminating silos, thereby saving resources and reducing complexity.

With 88% of organizations favoring a platform approach over standalone solutions, many are consolidating their tech stacks in this direction. This consolidation provides native visibility across clouds, devices, communications, locations, applications, people, and third-party security tools and intelligence.

Upskilling your existing talent in AI

As revealed in the State of AI Cybersecurity Survey 2024, only 26% of cybersecurity professionals say they have a full understanding of the different types of AI in use within security products [6].

Understanding AI can upskill your existing staff, enhancing their expertise and optimizing business outcomes. Human expertise is crucial for the effective and ethical integration of AI. To enable true AI-human collaboration, cybersecurity professionals need specific training on using, understanding, and managing AI systems. To make this easier, the Darktrace ActiveAI Security Platform is designed to enable collaboration and reduce the learning curve – lowering the barrier to entry for junior or less skilled analysts.  

However, to bridge the immediate expertise gap in managing AI tools, organizations can consider expert managed services that take the day-to-day management out of the SOC’s hands, allowing them to focus on training and proactive initiatives.

Conclusion

Experts predict the cybersecurity skills gap will continue to grow, increasing operational and financial risks for organizations. AI for cybersecurity is crucial for CISOs to augment their teams and scale defense capabilities with speed, scalability, and predictive insights, while human expertise remains vital for providing the intuition and problem-solving needed for responsible and efficient AI integration.

If you’re thinking about implementing AI to solve your own cyber skills gap, consider the following:

  • Select an AI cybersecurity solution tailored to your specific business needs
  • Review and streamline existing workflows and tools – consider a platform-based approach to eliminate inefficiencies
  • Make use of managed services to outsource AI expertise
  • Upskill and reskill existing talent through training and education
  • Foster a knowledge-sharing culture with access to knowledge bases and collaboration tools

Interested in how AI could augment your SOC to increase efficiency and save resources? Read our longer CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

And to better understand cybersecurity practitioners' attitudes towards AI, check out Darktrace’s State of AI Cybersecurity 2024 report.

References

[1] https://www.isc2.org/research
[2] https://www.forbes.com/advisor/education/it-and-tech/cybersecurity-statistics/
[3] https://www.informationweek.com/cyber-resilience/the-psychology-of-cybersecurity-burnout
[4] https://www.ibm.com/downloads/cas/1KZ3XE9D
[5] https://www.ibm.com/downloads/cas/1KZ3XE9D
[6] https://darktrace.com/resources/state-of-ai-cyber-security-2024
About the author
The Darktrace Community